Numerous models have tried to embed knowledge graphs effectively in low dimensions. Among the state-of-the-art methods, Graph Neural Network (GNN) models provide structure-aware representations of knowledge graphs. However, they often use relations and their interactions with entities inefficiently. Moreover, most state-of-the-art knowledge graph embedding models suffer from scalability issues because they assign high-dimensional embeddings to entities and relations. To address these limitations, we propose a scalable general knowledge graph encoder that adaptively incorporates a powerful tensor decomposition method into the aggregation function of RGCN, a well-known relational GNN model. Specifically, the parameters of a low-rank core projection tensor, used to transform neighborhood entities in the encoder, are shared across relations to benefit from multi-task learning and to incorporate relation information effectively. In addition, we propose a low-rank estimation of the core tensor using CP decomposition to compress the model, which is also applicable as a regularization method to other similar linear models. We evaluate our model on knowledge graph completion as a common downstream task and train it with a new loss function based on contrastive learning, which relieves the training limitation of the 1-N method on huge graphs. We improve RGCN performance on FB15k-237 by 0.42% with considerably lower embedding dimensionality.
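To make the shared-core idea concrete, the sketch below shows how a CP-factorized projection tensor can be contracted with a relation embedding to produce a relation-specific projection matrix. The sizes, variable names, and factorization details are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_rel, rank = 32, 32, 16, 8       # illustrative sizes

# CP factors of a shared core projection tensor W = sum_j a_j (outer) b_j (outer) c_j
A = rng.normal(size=(d_out, rank))
B = rng.normal(size=(d_in, rank))
C = rng.normal(size=(d_rel, rank))

def relation_projection(rel_emb):
    """Contract the CP-factorized core tensor with a relation embedding to get
    that relation's d_out x d_in projection matrix; the factors are shared, so
    parameters grow with the rank rather than with the number of relations."""
    weights = C.T @ rel_emb          # (rank,) mixing coefficients
    return (A * weights) @ B.T       # sum_j weights[j] * a_j b_j^T

rel_emb = rng.normal(size=d_rel)
h_neighbor = rng.normal(size=d_in)
message = relation_projection(rel_emb) @ h_neighbor   # relation-aware neighbor message
```

With full per-relation projection matrices, a graph with R relations would need R x d_out x d_in parameters for this step, whereas the shared CP factors above need only rank x (d_out + d_in + d_rel).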
Gaussian Mixture Models (GMMs) are among the most potent kernel-based parametric density estimators and find application in many scientific domains. In recent years, with the dramatic growth of data sources, typical machine learning algorithms, e.g. Expectation Maximization (EM), encounter difficulty with high-dimensional and streaming data. Moreover, complicated densities often demand a large number of Gaussian components. This paper proposes a fast online parameter estimation algorithm for GMMs using first-order stochastic optimization. The approach provides a framework for coping with the challenges GMMs face on high-dimensional streaming data and complex densities by leveraging a flexibly-tied factorization of the covariance matrix. A new stochastic manifold optimization algorithm that preserves orthogonality is introduced and used alongside well-known Euclidean-space numerical optimization. Extensive empirical results on both synthetic and real datasets demonstrate the effectiveness of the proposed stochastic method over EM-based methods in terms of convergence to a better likelihood maximum, fewer epochs needed for convergence, and lower time consumption per epoch.
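As a concrete illustration of the flexibly-tied covariance idea and of an orthogonality-preserving stochastic step, consider the minimal sketch below. The shared-factor parameterization, the QR retraction, and all names are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 3                                   # data dimension, number of components

U = np.linalg.qr(rng.normal(size=(d, d)))[0]  # shared orthogonal factor (tied across components)
log_D = rng.normal(size=(K, d))               # per-component log diagonal of the precision

def neg_loglik(X, U, log_D, means, log_pi):
    """Negative log-likelihood of a GMM whose k-th precision is U diag(exp(log_D[k])) U^T:
    the rotation U is shared ("flexibly tied"), only the diagonals differ per component."""
    Z = (X[:, None, :] - means[None, :, :]) @ U             # rotated residuals, (n, K, d)
    quad = np.einsum('nkd,kd,nkd->nk', Z, np.exp(log_D), Z)
    log_det_prec = log_D.sum(axis=1)                         # log|U D_k U^T| = sum_j log D_kj
    log_comp = log_pi + 0.5 * (log_det_prec - d * np.log(2 * np.pi)) - 0.5 * quad
    m = log_comp.max(axis=1, keepdims=True)                  # log-sum-exp over components
    return -(m.squeeze(1) + np.log(np.exp(log_comp - m).sum(axis=1))).mean()

def orthogonal_step(U, euclidean_grad, lr=1e-2):
    """One stochastic step on the shared factor followed by a QR retraction,
    so the update stays on the manifold of orthogonal matrices."""
    Q, R = np.linalg.qr(U - lr * euclidean_grad)
    return Q * np.sign(np.diag(R))            # fix column signs for a unique retraction
```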
To overcome the challenges of multi-object tracking, recent algorithms use interaction cues alongside motion and appearance features, and they rely on graph neural networks or transformers to extract these interaction features, which leads to high computational cost. In this paper, a novel interaction cue based on geometric features is proposed, aiming to detect occlusions and re-identify lost targets at low computational cost. Moreover, most algorithms assume camera motion to be negligible, which is a strong assumption that does not always hold and leads to identity switches and target mismatches. This paper proposes a method for measuring camera motion and removing its effect, which efficiently reduces the impact of camera motion on tracking. The algorithm is evaluated on the MOT17 and MOT20 datasets, achieving state-of-the-art performance on MOT17 and comparable results on MOT20. The code is publicly available.
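A common way to realize the camera-motion compensation described above is to estimate a global transform between consecutive frames and warp the previous tracks with it. The sketch below illustrates this with sparse optical flow and a robust similarity fit; all parameter choices and helper names are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray):
    """Estimate global camera motion between consecutive grayscale frames with
    sparse optical flow plus a robust partial-affine fit (an illustrative sketch)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    return M  # 2x3 matrix mapping previous-frame coordinates into the current frame

def compensate_boxes(boxes, M):
    """Warp predicted track boxes (x1, y1, x2, y2) with the estimated motion,
    removing the camera's contribution before data association."""
    corners = boxes.reshape(-1, 2)                 # (2N, 2) corner points
    warped = corners @ M[:, :2].T + M[:, 2]
    return warped.reshape(-1, 4)
```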
Based on the manifold hypothesis, real-world data often lies on a low-dimensional manifold, while normalizing flows, as likelihood-based generative models, are unable to find such manifolds because of their structural constraints. Hence, an interesting question arises: "Can we find sub-manifold(s) of the data in a normalizing flow and estimate the data density on the sub-manifold?" In this paper, we introduce two approaches, namely per-pixel penalized log-likelihood and hierarchical training, to answer this question. We propose a single-step method for joint manifold learning and density estimation by disentangling the transformed space obtained by the normalizing flow into manifold and off-manifold parts. This is achieved with a per-pixel penalized likelihood function that learns the data sub-manifold. Normalizing flows assume the transformed data is Gaussianized, but this imposed assumption is not necessarily true, especially in high dimensions. To tackle this problem, a hierarchical training approach is adopted to improve the density estimation on the sub-manifold. The results validate the superiority of the proposed methods for simultaneous manifold learning and density estimation with normalizing flows in terms of generated image quality and likelihood.
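The sketch below illustrates one plausible form of a per-pixel penalized likelihood: the flow's latent is split into manifold and off-manifold coordinates, and the off-manifold part is penalized toward zero per dimension. The split, the weighting, and the names are assumptions for illustration, not the paper's exact objective.

```python
import torch

def penalized_flow_loss(z, log_det_jac, manifold_dim, penalty_weight=1.0):
    """Per-pixel penalized negative log-likelihood for joint manifold learning and
    density estimation with a normalizing flow (illustrative sketch).

    z            : (batch, d) latent produced by the flow
    log_det_jac  : (batch,) log|det J| of the flow transformation
    manifold_dim : number of latent coordinates kept as the data sub-manifold
    """
    z_on, z_off = z[:, :manifold_dim], z[:, manifold_dim:]
    # standard-normal log-density on the manifold part plus the change of variables
    nll = 0.5 * (z_on ** 2).sum(dim=1) - log_det_jac
    # penalize each remaining ("off-manifold") coordinate toward zero
    penalty = penalty_weight * (z_off ** 2).sum(dim=1)
    return (nll + penalty).mean()
```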
Recently, the link prediction problem, also known as knowledge graph completion, has attracted significant research attention. Although a few recent models have tried to attain relatively good performance by embedding knowledge graphs in low dimensions, the best results of the current state-of-the-art models are earned at the cost of considerably increasing the dimensionality of embeddings. However, this causes overfitting and, more importantly, scalability issues in the case of huge knowledge bases. Inspired by the advances in deep learning offered by variants of the Transformer model, thanks to its self-attention mechanism, in this paper we propose a model based on it to address the above limitations. In our model, self-attention is the key to applying query-dependent projections to entities and relations and to capturing the mutual information between them, in order to obtain highly expressive representations from low-dimensional embeddings. Empirical results on two standard link prediction datasets, FB15k-237 and WN18RR, demonstrate that our model achieves comparable or better performance than our three best recent state-of-the-art competitors, with a significant average reduction of 76.3% in the dimensionality of embeddings.
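To show what query-dependent representations via self-attention can look like for link prediction, here is a generic sketch: attention is computed over the (head, relation) pair and the pooled context is scored against all candidate tails. The architecture, pooling, and names are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities = 64, 1000                     # illustrative low dimension

E = rng.normal(size=(n_entities, d))         # entity embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def score_tails(head_emb, rel_emb):
    """Self-attention over the (head, relation) pair builds a query-dependent
    representation, which is then scored against every candidate tail."""
    X = np.stack([head_emb, rel_emb])                 # (2, d) input sequence
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    att = np.exp(Q @ K.T / np.sqrt(d))
    att /= att.sum(axis=1, keepdims=True)             # softmax over the pair
    context = (att @ V).mean(axis=0)                  # pooled query representation
    return E @ context                                # one score per candidate tail

scores = score_tails(E[0], rng.normal(size=d))        # rank all tails for (entity 0, relation)
```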
Typically, speech processing models consist of a language model as well as an acoustic model. Regardless of the language model's complexity and variants, it requires three key preprocessing steps: cleaning, normalization, and tokenization. Among these steps, normalization is necessary for format uniformity in pure-text applications. However, for language models embedded in speech processing modules, normalization is not limited to format uniformity; it must also convert every readable symbol, number, and so on into the way it is pronounced. To the best of our knowledge, there is no Persian normalization toolkit for language models embedded in speech processing modules, so in this paper we propose an open-source normalization toolkit for text processing in speech applications. In brief, we handle different kinds of readable Persian text, such as symbols (common currencies, #, @, URLs, etc.) and numbers (dates, times, phone numbers, country codes, etc.). Comparison with other available Persian text normalization tools shows the superiority of the proposed method for speech processing. In addition, comparing the model's performance with other common natural language libraries such as Hazm and Parsivar indicates that the proposed method performs correctly. Furthermore, its evaluation on some Persian Wikipedia data confirms the proper performance of the method.
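A minimal sketch of the symbol and number verbalization idea is shown below. The small mapping, the digit-by-digit reading, and the function name are assumed for illustration only; they are not the toolkit's actual API, and real Persian number reading is far richer than this.

```python
import re

# Illustrative spoken forms for a couple of symbols and the ten digits.
SYMBOLS = {"%": " درصد ", "$": " دلار "}
DIGITS = {"0": "صفر", "1": "یک", "2": "دو", "3": "سه", "4": "چهار",
          "5": "پنج", "6": "شش", "7": "هفت", "8": "هشت", "9": "نه"}

def normalize_for_speech(text: str) -> str:
    """Replace readable symbols and digits with how they are pronounced,
    then collapse whitespace (a toy sketch of speech-oriented normalization)."""
    for sym, spoken in SYMBOLS.items():
        text = text.replace(sym, spoken)
    text = re.sub(r"\d", lambda m: " " + DIGITS[m.group(0)] + " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_for_speech("50% تخفیف"))   # digit-by-digit reading plus the percent word
```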
Differentiable Architecture Search (DARTS) has attracted considerable attention as a gradient-based Neural Architecture Search (NAS) method. Since the introduction of DARTS, there has been little work on adapting its search space to state-of-the-art architecture design principles for CNNs. In this work, we aim to close this gap by incrementally augmenting the DARTS search space with micro-design changes inspired by ConvNeXt and studying the trade-off between accuracy, evaluation layer count, and computational cost. To this end, we introduce the Pseudo-Inverted Bottleneck conv block, intended to reduce the computational footprint of the inverted bottleneck block proposed in ConvNeXt. Our proposed architecture is far less sensitive to evaluation layer count and significantly outperforms a DARTS network of similar size at layer counts as small as 2. Furthermore, with fewer layers it not only achieves higher accuracy with lower GMACs and parameter count; GradCAM comparisons also show that our network detects distinctive features of target objects better than DARTS does.
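For context, the sketch below is a simplified version of the baseline inverted bottleneck block from ConvNeXt (depthwise 7x7 convolution, LayerNorm, 4x pointwise expansion, GELU, pointwise reduction, residual add), i.e. the block whose computational footprint the Pseudo-Inverted Bottleneck aims to reduce. It is not the proposed block itself, and layer scale and stochastic depth are omitted.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Simplified ConvNeXt inverted bottleneck block (baseline, not the proposed variant)."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, expansion * dim)   # pointwise expansion
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)   # pointwise reduction

    def forward(self, x):
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)       # NCHW -> NHWC so LayerNorm/Linear act on channels
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)       # back to NCHW
        return residual + x

out = ConvNeXtBlock(dim=64)(torch.randn(1, 64, 32, 32))   # shape-preserving block
```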
Recent advances in language modeling have enabled new conversational systems. In particular, it is often desirable for people to make choices among specified options when using such systems. We address the problem of reference resolution when people use natural expressions to choose between real-world entities. For example, given the choice `Should we make a Simnel cake or a Pandan cake?', a natural response from a non-expert may be indirect: `let's make the green one'. Reference resolution with such natural expressions has been little studied, so robustly understanding this kind of language has large potential for improving naturalness in dialog, recommendation, and search systems. We create AltEntities (Alternative Entities), a new public dataset of entity pairs and utterances, and develop models for the disambiguation problem. Consisting of 42K indirect referring expressions across three domains, it enables for the first time the study of how large language models can be adapted to this task. We find they achieve 82%-87% accuracy in realistic settings, which, while reasonable, also invites further advances.
The problem of reversing the compilation process, decompilation, is an important tool in reverse engineering of computer software. Recently, researchers have proposed using techniques from neural machine translation to automate decompilation. Although such techniques hold the promise of targeting a wider range of source and assembly languages, to date they have primarily targeted C code. In this paper, we argue that existing neural decompilers have achieved higher accuracy at the cost of requiring language-specific domain knowledge, such as tokenizers and parsers to build an abstract syntax tree (AST) for the source language, which increases the overhead of supporting new languages. We explore a different trade-off that, to the extent possible, treats the assembly and source languages as plain text, and show that this allows us to build a decompiler that is easily retargetable to new languages. We evaluate our prototype decompiler, Beyond The C (BTC), on Go, Fortran, OCaml, and C, and examine the impact of parameters such as tokenization and training-data selection on the quality of decompilation, finding that it achieves decompilation results comparable to prior work in neural decompilation with significantly less domain knowledge. We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
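One way to picture the "plain text" treatment is to train a single language-agnostic subword tokenizer on assembly and source alike, with no parser or AST on either side. The sketch below uses the Hugging Face tokenizers library for that purpose; the file names, vocabulary size, and special tokens are placeholders, and this is not BTC's actual preprocessing pipeline.

```python
from tokenizers import ByteLevelBPETokenizer

# Train one byte-level BPE tokenizer over both sides of the parallel corpus
# (placeholder file paths; any plain-text assembly/source dump would do).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["corpus/assembly.txt", "corpus/source.txt"],
                vocab_size=32000, min_frequency=2,
                special_tokens=["<s>", "</s>", "<pad>"])

asm = "movq %rdi, %rax\naddq $1, %rax\nretq"
print(tokenizer.encode(asm).tokens)   # subword pieces, no language-specific parsing
```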
The process of screening molecules for desirable properties is a key step in several applications, ranging from drug discovery to material design. During drug discovery specifically, protein-ligand docking, or chemical docking, is a standard in-silico scoring technique that estimates the binding affinity of molecules with a specific protein target. Recently, however, as the number of virtual molecules available to test has grown rapidly, these classical docking algorithms have become a significant computational bottleneck. We address this problem by introducing Deep Surrogate Docking (DSD), a framework that applies deep-learning-based surrogate modeling to accelerate the docking process substantially. DSD can be interpreted as a formalization of several earlier surrogate prefiltering techniques, adding novel metrics and training practices. Specifically, we show that graph neural networks (GNNs) can serve as fast and accurate estimators of classical docking algorithms. Additionally, we introduce FiLMv2, a novel GNN architecture that we show outperforms existing state-of-the-art GNN architectures, attaining more accurate and stable performance by allowing the model to filter irrelevant information out of the data more efficiently. Through extensive experimentation and analysis, we show that the DSD workflow combined with the FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3% recall error rate on an example docking task. Our open-source code is available at https://github.com/ryienh/graph-dock.
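The surrogate-prefiltering workflow itself is simple to state: score every molecule with a cheap learned surrogate, then run expensive docking only on the top-scoring fraction. The sketch below captures that generic idea; `surrogate_score` and `dock_score` are hypothetical placeholders (e.g. a trained GNN and a classical docking call), not functions from the graph-dock repository.

```python
import numpy as np

def surrogate_prefilter(molecules, surrogate_score, dock_score, keep_fraction=0.1):
    """Score all molecules with a cheap surrogate, then run expensive docking
    only on the top-scoring fraction (a generic sketch of surrogate prefiltering).

    Assumes lower scores indicate stronger predicted binding."""
    cheap = np.array([surrogate_score(m) for m in molecules])
    k = max(1, int(keep_fraction * len(molecules)))
    top_idx = np.argsort(cheap)[:k]                   # indices of the most promising molecules
    return {molecules[i]: dock_score(molecules[i]) for i in top_idx}

# Toy usage with stand-in scoring functions.
library = [f"mol_{i}" for i in range(1000)]
results = surrogate_prefilter(library, surrogate_score=lambda m: hash(m) % 97,
                              dock_score=lambda m: float(len(m)))
```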